2. Learning Objectives
Overview of the Integrated Baseline Review
LO 1: Understand the motivations for the Performance Measurement Baseline (PMB), starting with DID 81650.
LO 2: Gain skills in the six processes needed to build a credible PMB, using Risk+ to address DID 81650.
LO 3: Develop the framework for schedule, cost, and technical performance risk categorizations.
LO 4: Gain skills in executing the PMB, with an integrated Risk Register to maintain the credibility of the PMB.
LO 5: Establish the processes needed to sustain this credibility, including Risk+ operations and Risk Register maintenance.
3. Our Two Day Agenda
Day 1 – Overview of building a credible Performance Measurement Baseline
08:00 – 08:50  1: Steps to building a credible Performance Measurement Baseline
09:00 – 10:50  2: Individual elements of the Integrated Master Schedule (IMS)
11:00 – 11:50  3: Connecting the dots to an actual IMS
12:00 – 12:50  4: Lunch Break
13:00 – 13:50  5: Example of an Integrated Master Schedule ready for DID 81650
14:00 – 15:50  6: Demonstration of Risk+ integrated with the IMS, and understanding the outcomes
16:00 – 16:50  7: Wrap up for Day 1 – feedback from students, corrective actions for Day 2
Day 2 – Hands-on development of the PMB using DID 81650
08:00 – 08:50  8: IMS structural assessment, gap closure, ready for workshop
09:00 – 10:50  9: Building the risk category values for each Work Package, and updating the risk register
11:00 – 11:50  10: First run of Risk+ and management report of the confidence of completing on or before the planned date
12:00 – 12:50  11: Lunch Break
13:00 – 13:50  12: Adjusting the IMS with this new information
14:00 – 15:50  13: Building the “baseline-able” IMS compliant with DID 81650
16:00 – 16:50  14: Final questions, plans for “phone support,” and any remaining closure plans
4. But First, A Warning
We’re going to cover a lot of material in two days.
6. [Diagram: the Deliverables Based Planning flow – Identify Needed Capabilities → Identify Requirements → Establish a Performance Measurement Baseline → Execute the Performance Measurement Baseline. Supporting artifacts: Capabilities Based Plan, Operational Needs, System Value Stream, Technical Requirements, Requirements Baseline, Technical Performance Measures, PMB. Execution is measured with 0%/100% Earned Value Performance and Technical Performance Measures. Feedback loops: Changes to Needed Capabilities, Changes to Requirements Baseline, Changes to Performance Baseline]
Deliverables Based Planning® is a registered trademark of Lewis & Fowler. Copyright © Lewis & Fowler, 2011.
8. Build a time-phased network of activities describing the work to be performed, the budgeted cost for this work, the organizational elements that produce the deliverables from this work, and the performance measures showing this work is proceeding according to plan.
3.1 Decompose the program Scope into a product-based Work Breakdown Structure (WBS), then further into Work Packages describing the production of the deliverables traceable to the requirements and to the needed capabilities.
3.2 Assign responsibility for Work Packages (the groupings of deliverables) to a named owner accountable for the management of the resource allocations, the cost and schedule baseline, and technical delivery.
3.3 Arrange the Work Packages in a logical network with defined deliverables, milestones, internal and external dependencies, and credible schedule, cost, and technical performance margins.
3.4 Develop the time-phased Budgeted Cost for Work Scheduled (BCWS) for the labor and material costs in each Work Package and the project as a whole. Assure proper resource allocations can be met and budget profiles match the expectations of the program sponsor.
3.5 Assign objective Measures of Performance (MoP) and Measures of Effectiveness (MoE) for each Work Package and summarize these for the project as a whole.
3.6 Establish a Performance Measurement Baseline (PMB) used to forecast the Work Package and project ongoing and completion cost and schedule performance metrics.
9. The Road To Project Success Depends On …
Where are we going?
How do we get there?
Are there enough resources?
What are impediments to progress?
How do we measure progress?
10. The PLAN is the strategy for the successful completion of the project. The SCHEDULE is the sequence of work, the assigned resources, and the measures of progress that implement the Plan. Both are needed to increase the Probability of Project Success (PoPS).
Day 1, Session 1: Steps in building a credible PMB (1 Hour)
[Diagram: Risk, SOW, Cost, WBS, IMP/IMS, and TPM surrounding the PMB]
11. Framework for Increasing the Probability of Program Success (PoPS)
Program Enablers
Program Process Capabilities
Business Enablers
12. Just a reminder of the project elements we have control over
13. [Diagram: Risk, SOW, Cost, WBS, IMP/IMS, and TPM surrounding the PMB]
Cost Basis of Estimate (BOE) built bottom up and validated top down.
Statement of Work (SOW) traceable to the Work Breakdown Structure and all BOEs.
Work Breakdown Structure (WBS) built using MIL-STD-881C guidance – products and services only, no functional departments.
IMP/IMS built using DoD and other guidance to measure increasing maturity of deliverables.
Technical Performance Measures (TPM) for each major deliverable, in units of measure meaningful to the decision maker.
14. Want Some Motivation for the WBS?
Forces the creation of detailed steps by delineating the products and services that produce them.
Lays the groundwork for schedule and budget by creating “buckets” to assign resources and costs.
Creates accountability by defining explicit connections between the work to be performed and those performing the work.
Creates commitment by making visible to all project participants the previous three activities.
15. What does a good WBS NOT look like?
It’s not a laundry list of work to be done.
It’s not a functional decomposition.
It’s not a direct map of the requirements.
It’s not a reflection of the underlying software partitioning.
It’s not the first structure you might think of…
16. Connect the WBS to Work Packages and define the Tasks to produce Deliverables
[Example WBS – Business Need: Process Invoices for Top Tier Suppliers.
1st Level: Electronic Invoice Submittal; Routing to Payables Department.
2nd Level: Payables Account Verification; Payment Scheduling; Material receipt verification; “On hand” balance updates.
Deliverables are defined in each Work Package.]
17. Establishing the Three Elements of the Performance Measurement Baseline
[Diagram: three parallel flows converging on PMB approval]
Technical Baseline: Perform Functional Analysis → Determine Scope and Approach → Develop Technical Logic → Develop Technical Baseline.
Schedule Baseline: Develop WBS → Define Activities → Estimate Time Durations → Sequence Activities → Identify Apportioned Milestones → Finalize Schedule.
Cost Baseline: Determine Resource Requirements → Prepare Cost Estimate → Resource Load Schedule → Finalize Apportioned Milestones → Determine Funding Constraints → Approve PMB.
18. What does a good schedule look like?
A good schedule is predictive – it shows what is going to happen in the future, and what the alternatives are if that doesn’t actually happen.
A good schedule is reflective – it shows where the project stands in relation to the planned position against the actual work that has been accomplished.
A good schedule is dynamic – it can be adjusted when the reality of the project changes.
19. Improving the credibility of the schedule
Build the requirements in a tool.
Build the PLAN before building the SCHEDULE.
Manage the project with a Project Management Tool.
Make every task duration fit a predefined guide.
Use a RACI and RAM to assign accountability.
Every task has a deliverable.
Have a Plan B and a Plan C.
All costs and durations are random variables.
In the end, it’s always about the people.
20. A “threadbare” and corny phrase that is still the best approach to success
21. What does a PLAN Look Like?
25. Mapping the steps to the process of building the Performance Measurement Baseline
The six steps of physically assembling the Performance Measurement Baseline cover all the processes of establishing the PMB. Each step in the sequence advances the PMB to its final maturity – ready for baselining.
The six steps: Decompose Scope; Assign Responsibility; Arrange Work Packages; Develop BCWS; Assign Performance Measures; Set Performance Baseline.
The processes: Perform functional analysis; Determine scope and approach; Develop Work Breakdown Structure; Develop technical logic; Develop technical baseline; Define activities; Estimate time durations; Sequence activities; Identify apportioned milestones; Finalize schedule; Finalize apportioned milestones; Determine resource requirements; Prepare cost estimates; Resource load schedule; Determine funding constraints; Approve performance measurement baseline.
26. A credible IMS is more than the work, durations, and relationships. It’s an executable set of activities that implements the program’s strategy – the PLAN. The IMS buys down risk, provides visibility into project performance, indicates alternative approaches, and provides actionable information to the decision makers.
Day 1, Session 2: Individual Elements of an Integrated Master Schedule (2 Hours)
27. Critical Success Factors for the Performance Measurement Baseline
Deliverables represent the required business capabilities and their value, as defined by the business and shared by the development team.
When all deliverables and their Work Packages are completed, they are not revisited or reopened. They are 100% done.
The progression of Work Packages defines the increasing maturity of the project.
The business value of the deliverables to the customer increases as Work Packages are completed.
Completion of Work Packages is represented by the Physical Percent Complete of the project. Either 0%/100% or Apportioned Milestones are used to state the completion of each Work Package.
[Diagram: Business Requirements → Technical Capabilities → Work Packages → Deliverables]
28. The Critical Few
1. Estimated durations developed to known confidence levels.
2. Probability distributions for categories of work.
3. Risk parameters for each category of work.
4. Credible sequences of work dependencies.
5. Alternative paths through the network to deal with uncertainty.
6. Measures of performance in units meaningful to the decision makers.
29. Let’s Build the Performance Measurement Baseline Using The Eight Steps
[Image: http://www.softwaretechnews.com/images/STN_April_09_lores_Page_29_Image_0001.jpg]
30. This approach is called Product Development Kaizen and is used by Lean Six Sigma firms to ferret out the system capabilities before any technical or operational requirements are defined. Use this to reverse engineer or validate the WBS and connect WHAT with WHY before proceeding to build the CWBS or confirm the WBS.
31. [Diagram: Program Events, Significant Accomplishments, Accomplishment Criteria, CDRLs and Deliverables, Tasks Contained in Work Packages, Statement of Work, and CWBS, all aligned]
Measure progress to plan using Physical Percent Complete at the Accomplishment Criteria (AC) and CWBS level, starting with the following connections:
The Statement of Work defines the CWBS; the work structure is aligned to the SOW.
Completed SAs are entry criteria for Program Events.
Completed Work Packages are exit criteria for Tasks.
Accomplishment Criteria describe increasing product maturity as 0/100 or per EVMS SD guidance.
Significant Accomplishments document the product maturity, aligned with the SOW and CWBS.
Tasks contained in Work Packages are the work necessary to mature the products, grouped by CWBS.
32. [Diagram: example schedule network of 15 numbered activities – Update Contractor System Spec, Update Program Development, Allocate Functional Reqmts, Update Functional System Design, Develop HWCI Specifications, Develop SIL Specifications, Build Astp1, F-18 IRR, SIL Baseline 1.0, Update SIL Test Cases, Develop Prelim SIL CSCI Critical Components (AstP 1,2; SSpS 1,2,3), Update AS Test, I&T on CVN, I&T on LHA – flowing through the milestones Contract Award + 15 days, Systems Requirements Review (SRR), System Functional Review (SFR), HW Preliminary Design Review (PDR), System PDR, EDM 1.0 Baseline, EDM 2.0 Baseline, Mfg Docs Available, TRR 1.0, EDM 7-8 TRR, TBD]
Each collection point provides an assessment of incremental business or mission value. Defining these points before the project starts is the basis of measuring progress to plan – because then you know what “done” looks like before it arrives.
33. Deliverables, WBS, Tasks and Schedule
[Diagram: WBS decomposition – Business Need: Process Invoices for Top Tier Suppliers; 1st Level: Electronic Invoice Submittal, Routing to Payables Department; 2nd Level: Payables Account Verification, Payment Scheduling, Material receipt verification, “On hand” balance updates – with terminal nodes mapped to Work Packages and their tasks]
The WBS is a decomposition of the work needed to fulfill the business requirements.
The terminal node of the WBS defines the products or services that produce the products of the project, and is defined by a Work Package. Deliverables are defined in the Work Package.
Tasks within the Work Package produce the deliverables.
100% completion of the deliverables is the measure of performance for the Work Package.
Management of the Work Package tasks is the responsibility of the WP Manager.
35. Program Events define the availability of a Capability at a point in time. Accomplishments represent requirements that enable Capabilities. Criteria represent Work Packages that fulfill Requirements.
[Diagram: Work Packages rolling up to Criteria, Accomplishments, and Program Events]
The increasing maturity of a product or service is described through Events or Milestones, Accomplishments, Criteria, and Work Packages.
The presence of these capabilities is measured by the Accomplishments and their Criteria.
Accomplishments are the pre-conditions for the maturity assessment of the product or service at each Event or Milestone.
Performance of the work activities, Work Packages, Criteria, Accomplishments, and Events or Milestones is measured in units of “physical percent complete” by connecting Earned Value with Technical Performance Measures.
38. [Diagram: tasks within AC:005 linked to tasks within AC:023 only through the AC-to-AC connection]
The 100% completed work in AC:005 is needed to start the work in AC:023.
In the IMP/IMS paradigm, there is no Task-to-Task connection across Accomplishment Criteria (AC) boundaries, only within an AC.
The AC-to-AC linking states “…all work in the predecessor AC must be complete before starting the successor work, assuring the minimum of rework due to partially defined requirements or partially completed products.”
39. [Diagram: Program Events PE:A and PE:B, with Significant Accomplishments SA:001–SA:004 and SA:008, Accomplishment Criteria AC:006, and their tasks]
The best arrangement has the completion of Event A start the first task in Event B. All work performed beyond the date of Event A is done at risk.
At PDR (Event A), approval to proceed to Event B (CDR) is given.
Only long lead items should cross Program Event boundaries.
All other work terminates on the Program Event, where a formal review of the planned maturity is conducted – SRR, SFR, PDR, CDR, …
This topology assures a complete assessment of “progress to plan” is available at each Program Event.
41. Risk: CEV-037 – Loss of Critical Functions During Descent
[Chart: planned risk level waterfall, risk score 24 down to 0, across dates 31.Mar.05 through 1.Jul.11, stepping down as mitigation activities complete – Conduct Force and Moment Wind…, Develop analytical model to de…, Conduct focus splinter review, Conduct Block 1 wind tunnel te…, Correlate the analytical model…, Conduct wind tunnel testing of…, Flight Application of Spacecra…, CEV block 5 wind tunnel testin…, In-flight development tests of…, Damaged TPS flight test (solid = linked, hollow = unlinked, filled = complete)]
The Risk Response and Risk ID appear in the IMS; the milestone dates are traceable between the Risk Management tool and the IMS.
42. An estimate must contain a confidence interval and an error band on that confidence interval to be credible. Otherwise it’s just a guess.
1. Estimating Durations of Work Packages
43. Steps in Building the Work Packages
Step 1 – Define what is going to be delivered to produce business value: one or more deliverables produced within a Work Package.
Step 2 – Define the effort and duration along with the confidence levels: only effort and total duration, and a level of confidence for each.
44. Define what’s going to be produced to deliver business value
Step 1 – Define the deliverables and their apportioned value

Description | Deliverable(s) | Apportioned Milestones
Transaction processing integration test complete | Test plan complete and approved | Author – 50%; Approval – 50%
Define integration testing environment | Integration Test Plan complete; Test platform equipment defined; Test environment defined | Test Plan – 25%; Equipment List – 50%; Environment – 25%
Business processes defined and approved | Business process flow diagram | 100%
User acceptance testing defined | User Acceptance Plan developed | 100%
User Acceptance Testing conducted | Test environment operational; User Acceptance Testing performed with 90% success; UAT errors documented and allocated for repair in next release | Environment – 20%; UAT Conducted – 70%; Errors documented – 10%
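As a worked instance of the apportioned-milestone arithmetic, here is a minimal Python sketch using the weights from the last row of the table above (the data structure is illustrative, not a tool API). Each milestone earns its weight only when it is 100% done – no partial credit:

    # "User Acceptance Testing Conducted" work package, weights from the table.
    milestones = {
        "Environment operational": (0.20, True),   # (apportioned weight, done?)
        "UAT conducted":           (0.70, False),
        "Errors documented":       (0.10, False),
    }
    # Physical Percent Complete = sum of the weights of the completed milestones.
    percent_complete = sum(w for w, done in milestones.values() if done)
    print(f"WP physical percent complete: {percent_complete:.0%}")  # 20%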
45. Project Deliverables

Project Deliverables | Notional Percentage Allocation | Actual Allocation on past projects
Requirements / Analysis | 20% |
Product or Service Design | 10% |
Product or Service Production | 25% |
System Integration | 10% |
System Test Processes | 15% |
User Acceptance Testing Processes | 10% |
46. Define the effort and duration along with the confidence levels
Step 2 – Construct the estimates within confidence levels

Description | Duration | Duration Confidence | Effort | Effort Confidence
Transaction processing integration test complete | 10w | 1 | 2680h | 2
Define integration testing environment | 4w | 1 | 480h | 1
Business processes defined and approved | 6w | 2 | 1200h | 1
User acceptance testing defined | 3w | 2 | 800h | 2
User acceptance testing conducted | 4w | 1 | 200h | 1
47. Getting to an estimate without having to understand all the detailed requirements
Given a software project element, how long will it take, and how much effort is expended over that period? This effort over duration provides the cost.
Say we have a requirement for a customer service interface. The functions can be enumerated and the core technology is known. Ask the following series of questions:

Question to the Group | Answer from the Group
Can we do this in one (1) year? | Sure, no problem.
How about one (1) week? | Not hardly – it can’t be done in a week.
How about six (6) months? | Yes, that might be possible.
How about four (4) months? | That’s cutting it really close; I’m not sure about 4 months.
How about five (5) months? | Yes, that’s about as short as I’d go.

So with 5 questions asked of a group of subject matter experts, we get an estimate of 5 months with a variance of a month or so on either side – roughly 20% accuracy on a simple problem in about 30 seconds. Scale that to larger or more complex problems with more questions – or better questions – and a bit more thoughtfulness, and you can still get within 20%.
To put this into practice requires more discipline of course, but the principle of a Wide Band Delphi estimating process is well tested in the field and well documented in the literature. The “20 questions” game is an easy way to get to an estimate for duration and effort.
48. Conditions for a discrete Work Package used for Performance Measurement

Condition: Outcome of the WP is a technical work product.
  Example: Requirements, designs, or test procedures needed as a set for a downstream task.
  Discrete: Y | Combined: N
  Rationale: If the WP constrains the start or completion of a subsequent WP, analyze schedule variances to determine the impact on downstream activities.

Condition: Outcome of the WP is a set of technical work products. An individual work product – a component of the end work product – may be an input to a subsequent WP before completion of the set, but is not itself a constraint.
  Example: An individual requirement, design, or test within a WP that is an input to a downstream task but is itself not a constraint.
  Discrete: Y | Combined: Y
  Rationale: If an individual work product is not a constraint to a downstream task, there is no need to monitor its progress at the WP level. It may be combined with similar work products in a WP. Only the WP completion must be linked with the successor activity.

Condition: Outcome is a scheduled process required to meet a project objective.
  Example: The process must be implemented to achieve planned cost, performance, or schedule – standing up a development environment.
  Discrete: Y | Combined: N

Condition: Outcome is a recurring work product that does not constrain the start or completion of another recurring WP.
  Example: Status reporting or documentation of a recurring meeting.
  Discrete: N | Combined: Y
  Rationale: Recurring work products, although scheduled, rarely constrain another task. There is no significant schedule impact to downstream tasks.

Condition: Work scope is general or supportive.
  Example: Project management, administrative support.
  Discrete: N | Combined: Y
  Rationale: Multiple Level of Effort tasks may be combined into one WP; supporting detail of the time-phased budget at the task level should be maintained.

Derived from Performance-Based Earned Value®, Paul Solomon and Ralph Young, John Wiley & Sons, 2010.
49. There are two types of Uncertainty
Technical – uncertainty about the functional and performance aspects of the program’s technology that impacts the producibility of the product or creates delays in the schedule.
Programmatic – uncertainty about the duration and cost of the activities that deliver the functional and performance elements of the program, independent of the technical risk.
50. All elements of a project – its cost, schedule, and technical performance – are random variables. Knowing the underlying probability distribution of these random variables is a Critical Success Factor for the application of Monte Carlo simulation.
2. Probability Distributions
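To make this concrete, here is a minimal Monte Carlo sketch in Python. It is not Risk+; the three-point durations, the triangular distributions, and the serial finish-to-start chain are illustrative assumptions only:

    import random

    # Each task: (optimistic, most likely, pessimistic) duration in days,
    # arranged finish-to-start, so the chain completes when their sum does.
    tasks = [
        (8, 10, 16),
        (4, 5, 9),
        (12, 15, 30),
    ]

    def one_run():
        return sum(random.triangular(lo, hi, ml) for lo, ml, hi in tasks)

    samples = sorted(one_run() for _ in range(5000))

    # S-curve value at a duration: fraction of runs finishing on or before it.
    def confidence(days):
        return sum(1 for s in samples if s <= days) / len(samples)

    p80 = samples[int(0.80 * len(samples))]  # 80th percentile completion
    print(f"80% confidence completion: {p80:.1f} days")
    print(f"Chance of finishing in 30 days: {confidence(30):.0%}")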
51. The Probability Distribution Function is the lifeblood of good planning
[Chart: probability of occurrence as a function of the number of samples – “the number of times a task duration appears in a Monte Carlo simulation”]
52. Task “Most Likely” ≠ Project “Most Likely” – this must be understood by every planner
PERT assumes the probability distribution of the project completion time is the same as that of the tasks on the critical path. Because other paths can become critical paths, PERT consistently underestimates the project completion time.
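A toy Python demonstration of this merge bias, with assumed numbers – two parallel paths, each with a “most likely” of 20 days; the project finishes at the later of the two:

    import random

    def path():
        # One sampled path duration: optimistic 15, pessimistic 30, most likely 20.
        return random.triangular(15, 30, 20)

    single = [path() for _ in range(10_000)]
    project = [max(path(), path()) for _ in range(10_000)]  # two parallel paths

    print(f"Mean of a single path:    {sum(single) / len(single):.1f} days")
    print(f"Mean of max of two paths: {sum(project) / len(project):.1f} days")
    # The second number is consistently larger - the merge bias PERT ignores.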
53. The Program is a System, just like any other System with complex interacting parts
[Diagram: inputs → system → outputs]
The programmatic and planning dynamics act as a system. The “system response” is the transfer function between input and output.
Understanding this transfer function may appear beyond our interest, but it is part of the stochastic dynamic response to disruptions in our plans.
“What if” really means “what if at this point in the response curve of the system.”
54. “Risk management is how adults manage projects.” ‒ Tim Lister
3. Risk Parameters for Planned Work
55. Risk is measured as any deviation from the original baseline. Risk is anything that results in a variance.
Variance at Completion (VAC) is the basic measure of risk encountered by the end of the contract effort, whether the risk is rooted in issues related to planning of scope, estimating, scheduling, or technical criteria identified during the normal course of the work.
56. Why Probabilistic Risk Analysis is Often Opposed by Management
Many people do not understand the underlying statistics – education, practice, and guidance are needed.
Many planners lack formal probability and statistics training – education, practice, and guidance are needed.
Most planners perform deterministic analysis of schedules and cost – risk is hard work.
The fact that probabilistic risk analysis is built on uncertainty is seen as a weakness in the planning process, not a strength: “Why can’t you know how long it will take or how much it costs?”
People tend to think that a “lack of data” is a reason not to perform probabilistic schedule risk analysis. The exact opposite is true.
57. Likelihood levels:
E – Near Certainty
D – Highly Likely
C – Likely
B – Low Likelihood
A – Not Likely

Consequence levels:
Level A – Technical: minimal or no consequence to technical performance. Schedule: minimal or no impact. Cost: minimal or no impact.
Level B – Technical: minor reduction in technical performance or supportability. Schedule: able to meet key dates. Cost: budget or unit production cost increase < **(1% of Budget).
Level C – Technical: moderate reduction in technical performance or supportability with limited impact on program objectives. Schedule: minor schedule slip; able to meet key milestones with no schedule float. Cost: budget or unit production cost increase < **(5% of Budget).
Level D – Technical: significant degradation in technical performance or major shortfall in supportability. Schedule: program critical path affected. Cost: budget or unit production cost increase < **(10% of Budget).
Level E – Technical: severe degradation in technical performance. Schedule: cannot meet key program milestones; slip > X months. Cost: exceeds budget increase or unit production cost threshold.

This matrix must be built for each category of risk. The decision for each dimension comes from Subject Matter Experts and the Risk Management team.
[Diagram: 5×5 likelihood × consequence matrix, levels A through E on each axis]
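As a sketch of how such a matrix gets used (the numeric scores are an illustrative assumption; the actual scoring bands come from the Subject Matter Experts and the Risk Management team):

    # Map the ordinal levels to 1..5 and score a risk as likelihood x consequence.
    LIKELIHOOD = {"A": 1, "B": 2, "C": 3, "D": 4, "E": 5}   # Not Likely .. Near Certainty
    CONSEQUENCE = {"A": 1, "B": 2, "C": 3, "D": 4, "E": 5}  # Minimal .. Severe

    def risk_score(likelihood, consequence):
        return LIKELIHOOD[likelihood] * CONSEQUENCE[consequence]

    print(risk_score("D", "C"))  # Highly Likely x Moderate -> 12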
58. Putting planned work in the right order is an iterative process. If you think you’ve got it right the first time, it’s wrong. If you think you’ve got it right the 3rd time, you’re getting close.
Use the Monte Carlo simulator to assess the impacts of the work order – the near-Critical-Path analysis.
4. Credible Sequencing of the Work
59. Attribute → Beneficial Outcome from this Attribute
Maturity flows through Program Events → Performance measurement is in units of increasing maturity of the Technical Performance Measures; each event is a mini authorization to proceed.
Single outcome for each Work Package (AC) → Measure Physical Percent Complete at the WP level; use 0/100 for the vast majority of tasks.
Technical Performance Measures are explicitly visible → Connect Cost, Schedule, and Technical Performance. EV does not provide a means of adjusting for being “off TPM,” so make your own adjustments to the risk numbers for now.
Risk retirement explicitly visible → Risk retirement is embedded in the IMS; risk mitigation means waiting until the risk happens.
IMS flows vertically 1st and horizontally 2nd → All work supports the assessment of maturity; isolate task dependencies within a Work Package.
No Event linkage except for long lead items → 0/100 requires no partial completion.
Decoupled dependencies improve risk responsiveness → The 1st-round IMS defines a free-flowing process; maintaining this decoupling is key to a “dynamic” IMS that can respond to the natural changes in the program.
60. A Quick Review …
“The Performance Measurement Baseline (PMB) is a time-phased budget plan for accomplishing work, against which contract performance is measured. It includes the budgets assigned to scheduled control accounts and the applicable indirect budgets. For future effort, not planned to the control account level, the PMB also includes budgets assigned to higher level Contractor Work Breakdown Structure (CWBS) elements, and to undistributed budgets. It does not include management reserve.”
— Earned Value Implementation Guide, October 2006
But if you’ve got:
The wrong work, performed in the wrong order,
Work that can’t be measured against the Technical Performance Measures,
Insufficient resources to absorb the planned BCWS,
No measure of effectiveness (MoE) or measure of performance (MoP) of the produced products against the planned outcomes, or
No risk retirement tasks embedded in the IMS …
… THE PMB IS NOT CREDIBLE.
61. We always need a Plan B, and many times a Plan C.
These paths don’t have to be on baseline, but they have to be in the mind of the Program Manager, because when they are needed, it’s usually too late to discover them.
5. Identify Alternative Paths
63. Branching Probabilities – Simple Approach
Plan the risk alternatives that “might” be needed; each mitigation has a Plan B branch.
Keep alternatives as simple as possible (maybe one task).
Assess the probability of the alternative occurring.
Assign duration and resource estimates to both branches.
Turn off the alternative for a “success” path assessment; turn off the primary for a “failure” path assessment.
[Diagram: Plan A (70% probability of success) and Plan B (30% probability of failure), with current and future margin; 80% confidence for completion with current margin; duration of Plan B < Plan A + Margin]
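In Monte Carlo terms, the branch is sampled with its probability on each run. A minimal Python sketch of the 70/30 branching above, with all durations assumed for illustration:

    import random

    def run():
        upstream = random.triangular(10, 18, 12)
        if random.random() < 0.30:                   # 30% probability of failure
            branch = random.triangular(8, 20, 12)    # Plan B: recovery work
        else:
            branch = random.triangular(4, 8, 5)      # Plan A: success path
        return upstream + branch

    samples = sorted(run() for _ in range(5000))
    print(f"80% confidence duration: {samples[int(0.80 * len(samples))]:.1f} days")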
64. Managing margin in the risk-tolerant IMS requires the reuse of unused durations
Programmatic margin is added between the Development, Production, and Integration & Test phases.
Risk margin is added to the IMS where risk alternatives are identified.
Margin that is not used in the IMS for risk mitigation is moved to the next sequence of risk alternatives. This enables us to buy back schedule margin for activities further downstream, and to control the ripple effect of schedule shifts on margin activities.
[Diagram: first identified risk alternative (Plan A / Plan B, 5 days margin) and second identified risk alternative (Plan A / Plan B, 5 days margin); 3 days of margin used, downstream activities shifted left 2 days, and 2 days added to the later margin task to bring the schedule back on track; duration of Plan B < Plan A + Margin]
65. Measures of Performance (MoP), Measures of Effectiveness (MoE), and Technical Performance Measures (TPM) are the basis of measuring “done.” These measures are used with the probabilistic confidence levels to provide meaningful measures.
6. Meaningful Measures
66. Do We Know How To Measure Value Along The Way To Our Destination?
How do we increase visibility into program performance?
How do we reduce cycle time to deliver the product?
How do we foster accountability?
How do we reduce risk?
How do we start our journey to success?
67. What’s Our Motivation for “Connecting the Dots?”
Technical Performance Measures …
Provide program management with information to make better decisions,
Increase the probability of delivering a solution that meets both the requirements and the mission need.
68. Measure of Effectiveness (MoE)
Measures of Effectiveness …
Are stated in units meaningful to the buyer,
Focus on capabilities independent of any technical implementation,
Are connected to mission success.
“Technical Measurement,” INCOSE-TP-2003-020-01
69. Measure of Performance (MoP)
Measures of Performance …
Are attributes that assure the system has the capability to perform,
Assess the system to assure it meets the design requirements that satisfy the MoE.
“Technical Measurement,” INCOSE-TP-2003-020-01
70. Key Performance Parameters (KPP)
Key Performance Parameters …
Have a threshold or objective value,
Characterize the major drivers of performance,
Are considered Critical to Customer (CTC).
“Technical Measurement,” INCOSE-TP-2003-020-01
71. Technical Performance Measures (TPM)
Technical Performance Measures …
Assess design progress,
Define compliance to performance requirements,
Identify technical risk,
Are limited to critical thresholds,
Include projected performance.
“Technical Measurement,” INCOSE-TP-2003-020-01
74. Technical Performance Measures – Trends and Responses
[Chart: vehicle weight TPM across program events CA, SRR, SFR, PDR, CDR, TRR – ROM in proposal, design model, bench scale model measurement, detailed design model, prototype measurement, flight 1st article – with values in the 23kg–28kg range]
75. There are many moving parts in the credible IMS. The Critical Few are the ones we’ll focus on in these sessions.
Day 1, Session 3: Connecting the Dots to an Actual IMS (1 Hour)
76. How Can We Measure Credibility?
Statistical credibility – the probability of completing on or before a date; the probability of cost being some value or less.
Program architecture credibility – can the planned maturity be reached with the work activities shown in the IMP?
Technical performance credibility – what measures of effectiveness (MoE) and measures of performance (MoP) are needed to assure increasing technical maturity?
78. The critical few for connecting the dots
Work durations that have probabilistic values – calibrated, ordinal probability distributions, with risk ranges assigned to classes of work.
A logical flow of work – work activities are nose to tail, with a 100% complete assessment before starting the next activity.
Resource loaded, so the BCWS connects cost to schedule.
79. Thinking About Risk Categories

Classification | Uncertainty | Overrun
A – Routine, been done before | Low | 0% to 2%
B – Routine, but possible difficulties | Medium to Low | 2% to 5%
C – Development, with little technical difficulty | Medium | 5% to 10%
D – Development, but some technical difficulty | Medium High | 10% to 15%
E – Significant effort, technical challenge | High | 15% to 25%
F – No experience in this area | Very High | 25% to 50%

These categories can be used to avoid asking the “3 point” question for each task. The category information is maintained in the IMS; when updates are made, the percentage change can be applied across all tasks, as the sketch below shows.
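A Python sketch of applying these categories in place of per-task 3-point questions – the overrun bands mirror the table, while the task names and baseline durations are illustrative:

    import random

    OVERRUN = {  # category: (min overrun, max overrun) from the table above
        "A": (0.00, 0.02), "B": (0.02, 0.05), "C": (0.05, 0.10),
        "D": (0.10, 0.15), "E": (0.15, 0.25), "F": (0.25, 0.50),
    }

    tasks = [("design review prep", 10, "B"), ("new interface dev", 20, "E")]

    def sampled_duration(baseline_days, category):
        lo, hi = OVERRUN[category]
        # Optimistic = baseline; pessimistic = baseline plus the max overrun;
        # most likely placed at the low end of the overrun band (an assumption).
        return random.triangular(baseline_days,
                                 baseline_days * (1 + hi),
                                 baseline_days * (1 + lo))

    total = sum(sampled_duration(days, cat) for _, days, cat in tasks)
    print(f"One sampled chain duration: {total:.1f} days")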
80. First, the major data elements
Task to “watch” (Number3)
Most Likely (Duration3)
Pessimistic (Duration2)
Optimistic (Duration1)
Distribution (Number1)
81. Before lunch, a quick look at the end
The height of each box indicates how often the project completed in a given interval during the run.
The S-Curve shows the cumulative probability of completing on or before a given date.
The standard deviation of the completion date and the 95% confidence interval of the expected completion date are in the same units as the “most likely remaining duration” field in the schedule.
[Chart: completion date histogram and cumulative S-curve for the task to “watch” – Date: 9/26/2005 2:14:02 PM; Samples: 500; Unique ID: 10; Name: Task 10; Completion Std Deviation: 4.83 days; 95% Confidence Interval: 0.42 days; each bar represents 2 days; dates run 2/10/06 to 3/17/06]

Completion Probability Table (Prob → Date):
0.05 → 2/17/06; 0.10 → 2/21/06; 0.15 → 2/22/06; 0.20 → 2/22/06; 0.25 → 2/23/06; 0.30 → 2/24/06; 0.35 → 2/27/06; 0.40 → 2/27/06; 0.45 → 2/28/06; 0.50 → 3/1/06; 0.55 → 3/1/06; 0.60 → 3/2/06; 0.65 → 3/3/06; 0.70 → 3/3/06; 0.75 → 3/6/06; 0.80 → 3/7/06; 0.85 → 3/8/06; 0.90 → 3/9/06; 0.95 → 3/13/06; 1.00 → 3/17/06

There is 80% confidence that the task will complete on or before 3/7/06.
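A completion-probability table like this is just the sorted Monte Carlo samples read off at each 5% step. A minimal Python sketch, with an assumed status date and an assumed triangular remaining duration standing in for a real schedule:

    import datetime
    import random

    start = datetime.date(2006, 2, 1)  # assumed status date
    samples = sorted(random.triangular(10, 45, 28) for _ in range(500))

    for i in range(1, 21):             # probabilities 0.05 .. 1.00
        p = 0.05 * i
        idx = min(int(p * len(samples)), len(samples) - 1)
        date = start + datetime.timedelta(days=samples[idx])
        print(f"{p:.2f}  {date:%m/%d/%y}")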
83. Let’s look at an IMS that has been populated with the fields and their contents, ready for a Risk+ assessment. We’ll walk through this setup process later, but here’s the completed product.
Day 1, Session 5: Example of an IMS ready for DID 81650 (1 Hour)
85. Risk+ requires a setup process, an operational process, and an analysis process to provide meaningful information to the decision makers. Risk+ tells us the probability of completing “on or before a date,” at “a cost or less.”
Day 1, Session 6: Demonstration of Risk+ (2 Hours)
[Chart: the same Risk+ completion histogram, S-curve, and completion probability table shown earlier]
88. What is Monte Carlo Simulation?
A class of computational algorithms that rely on repeated random sampling to compute their results.
Useful for simulating systems with many coupled degrees of freedom.
Used to model phenomena with significant uncertainty in inputs, such as risk.
Evaluates multidimensional definite integrals with complicated boundary conditions.
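The classic toy illustration of “repeated random sampling” is estimating an integral – here the area of a quarter circle, which converges to π/4 (the sample count is arbitrary):

    import random

    n = 100_000
    hits = sum(1 for _ in range(n)
               if random.random() ** 2 + random.random() ** 2 <= 1.0)
    print(f"pi is approximately {4 * hits / n:.3f}")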
89. Let’s Visit The Risk Classification Again

Classification | Uncertainty | Overrun
A – Routine, been done before | Low | 0% to 2%
B – Routine, but possible difficulties | Medium to Low | 2% to 5%
C – Development, with little technical difficulty | Medium | 5% to 10%
D – Development, but some technical difficulty | Medium High | 10% to 15%
E – Significant effort, technical challenge | High | 15% to 25%
F – No experience in this area | Very High | 25% to 50%

These classifications can be used to avoid asking the “3 point” question for each task. The classification information is maintained in the IMS; when updates are made, the percentage change can be applied across all tasks.
90. Guiding the Risk Factor Process requires careful weighting of each level of risk

Level | Min | Most Likely | Max
Low | 1.0 | 1.04 | 1.10
Low+ | 1.0 | 1.06 | 1.15
Moderate | 1.0 | 1.09 | 1.24
Moderate+ | 1.0 | 1.14 | 1.36
High | 1.0 | 1.20 | 1.55
High+ | 1.0 | 1.30 | 1.85
Very High | 1.0 | 1.46 | 2.30
Very High+ | 1.0 | 1.68 | 3.00

For tasks marked “Low,” a reasonable approach is to score the maximum 10% greater than the minimum. The “Most Likely” is then scored as a geometric progression through the remaining categories, with a common ratio of 1.5.
Tasks marked “Very High” are bounded at 200% of minimum – no viable project manager would let a task grow to three times the planned duration without intervention.
The geometric progression is somewhat arbitrary, but it should be used instead of a linear progression.
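The “Most Likely” column can be regenerated from the stated rule – a 4% overrun at “Low,” grown by the common ratio of 1.5 at each level. A quick Python sketch verifying this:

    levels = ["Low", "Low+", "Moderate", "Moderate+",
              "High", "High+", "Very High", "Very High+"]
    for k, level in enumerate(levels):
        most_likely = 1.0 + 0.04 * 1.5 ** k   # geometric progression, ratio 1.5
        print(f"{level:11s} most likely factor = {most_likely:.2f}")
    # Prints 1.04, 1.06, 1.09, 1.14, 1.20, 1.30, 1.46, 1.68 - the table's column.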
91. Risk+ Quick Overview – the major data elements
Task to “watch” (Number3)
Most Likely (Duration3)
Pessimistic (Duration2)
Optimistic (Duration1)
Distribution (Number1)
92. Monte Carlo Simulation of Schedule Risk
The height of each box indicates how often the project completed in a given interval during the run.
The S-Curve shows the cumulative probability of completing on or before a given date.
The standard deviation of the completion date and the 95% confidence interval of the expected completion date are in the same units as the “most likely remaining duration” field in the schedule.
[Chart: the same Risk+ completion histogram, S-curve, and completion probability table shown earlier – 80% confidence that the task to “watch” will complete by 3/7/06]
93. Integrating Risk and Schedule
Probabilistic completion times change as the program matures. The efforts that produce these improvements must be traceable in the IMS.
The “error bands” on the events must include the risk mitigation activities as well.
IMS activities show how the “error band” narrows over time. This is the basis of a “programmatic risk tolerant” IMS.
The probabilistic interval becomes more reliable as risk mitigations and maturity assessments add confidence to the IMS.
[Chart: 20% / mean / 80% completion-date bands for SRR, PDR, CDR, FRR, and ATLO, narrowing from Aug 05 through Feb 08; the baseline plan with plan margin and risk margin is the deterministic schedule, and the current plan with risks is the stochastic schedule, assessed against the Ready Early / Launch Period / Missed Launch Period window (Oct 07 – Jun 08)]
94. What can Confidence Intervals tell us about the validity of the IMS?
As the program proceeds, so does:
Increasing accuracy,
Reduced schedule risk,
Increasing visual confirmation that success can be reached.
[Chart: current estimate accuracy improving as the program proceeds]
95. The Cost Probability Distribution as a function of the weighted cost drivers
[Chart: cost versus cost driver (weight) – a cost estimating relationship of the form Cost = a + bX^c fitted through historical data points, with standard percent error bounds; cost modeling uncertainty and technical uncertainty combine into the cost estimate’s distribution]
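As a worked instance of a cost estimating relationship of this form (the coefficients are made up for illustration; real values come from the historical fit):

    # CER of the form Cost = a + b * X**c, with weight X as the cost driver.
    a, b, c = 120.0, 3.5, 1.2   # assumed coefficients from a regression fit

    def cost(weight_kg):
        return a + b * weight_kg ** c

    print(f"Estimated cost at 25 kg: {cost(25.0):.0f} (illustrative units)")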
96. The raw materials for connecting the dots are in place. Let’s test that statement with feedback and plans for tomorrow.
Day 1, Session 7: Wrap Up and Feedback (1 Hour)
97. Let’s Put These Ideas to Work Tomorrow on a Real Project
101. With our “real” IMS, let’s look at the structural aspects of the work efforts before doing any real analysis.
Day 2, Session 8: Structural Assessment and Gap Closures (1 Hour)
102. Integrating the Cost, Schedule, and Technical Risk Model
[Diagram: WBS tasks (Task 100–106) with probability density functions feeding a cumulative distribution function (0 to 1.0) over days, facilities, parts, and people]
Research the project, find analogies, ask endless questions, and analyze the results:
What can go wrong?
How likely is it to go wrong?
What is the cause?
What is the consequence?
A Monte Carlo simulation tool is mandatory.
103. Start with a “notional” arrangement of the “Bundles” of Work
Work Packages should not have intermediate connections to other WPs.
The first approach is to have long-running WPs with negative or positive lags to maintain sequencing. A better approach is to break the WP into separate deliverables and sequence them Finish-to-Start.
[Diagram: work packages of 1w–7w durations, rearranged from lag-linked long bars into Finish-to-Start chains]
104. Schedule Margin
DID 81650 defines schedule margin as a designated buffer and stipulates it is part of the baseline.
106. The simple approach to risk categories is just that – simple. We’ll need to understand the concepts of ordinal risk ranking and the interaction between the risk Probability Distribution Function (PDF) and the Risk+ work processes.
Day 2, Session 9: Building Risk Categories (2 Hours)
108. Risk Ranking of Individual Tasks

Risk Rank | Percent Variance | Notional Interpretation of Risk Ranking
1 | –5% / +10% | Normal business, technical & manufacturing processes are applied
2 | –5% / +15% | Normal business & technical processes are applied; new or innovative manufacturing processes
3 | –5% / +35% | Flight software development & certification processes
4 | –10% / +25% | Build & qualification of flight components, subsystems & systems
5 | –10% / +35% | Flight software qualification
6 | –5% / +175% | ISS thermal vacuum acceptance testing
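A sketch of turning a risk rank into sampling bounds – the table’s percent variances become the low and high ends of a triangular distribution around the planned duration (the 20-day, rank-4 task is illustrative):

    import random

    RANK_VARIANCE = {1: (-0.05, 0.10), 2: (-0.05, 0.15), 3: (-0.05, 0.35),
                     4: (-0.10, 0.25), 5: (-0.10, 0.35), 6: (-0.05, 1.75)}

    def sample(planned_days, rank):
        lo_pct, hi_pct = RANK_VARIANCE[rank]
        return random.triangular(planned_days * (1 + lo_pct),
                                 planned_days * (1 + hi_pct),
                                 planned_days)  # most likely = planned duration

    print(f"One draw for a rank-4, 20-day task: {sample(20, 4):.1f} days")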
109. Let’s take Risk+ out for a ride on our real schedule and discover how confident we are in the completion dates.
Day 2, Session 10: First run of Risk+ on a real schedule (1 Hour)
111. Now that we’ve seen the pictures, what do we do about them? What decisions can be made? What adjustments are needed to increase our confidence in meeting the completion dates?
Day 2, Session 12: Adjusting the IMS with this new information (1 Hour)
112. With this information, let’s define how much margin is needed, where to put this margin, and how to assess the “probability of completing on or before a specific date.”
Day 2, Session 13: Building a Baseline-able IMS compliant with DID 81650 (2 Hours)
113. IMS Improvement Opportunities
All tasks arranged Finish-to-Start, with no leads or lags. This allows re-sequencing with little or no effort and provides visibility into the flow of work.
All task work completes before the next work starts.
Fidelity is improved through complete vertical integration.
A clear boundary between logical flows isolates interactions.
Risk distributions are optimized by risk class and program phase.
114. IMS Metrics
Model Statistics: total activities; total milestones; total relationships; average task duration; summary tasks.
Relationship Types: Finish-to-Start; Start-to-Start; Finish-to-Finish; Start-to-Finish.
Lead/Lag Values: FS with positive lag; SS with no negative lag; FF with no negative lag; activities without predecessors or successors.
Target Dates: records with any target date type; hard targets – start on, finish on.
Network Status: activities completed; activities in progress; activities past due; activities with negative float; activities with float less than the program-defined threshold; activities with float >100 days; activities with 1-day duration; activities with duration <5 days.
115. Next steps, now that we have an understanding of what to do and what not to do.
Day 2, Session 14: Final questions and plans (1 Hour)
The Performance Measurement Baseline (PMB) is the primary assessment document for assuring the credibility of a program plan. The PMB is the baseline of the cost, schedule and deliverables for each Work Package in the plan.
Constructing the PMB requires knowledge of the business requirements, skill in developing the Work Packages that produce the deliverables for these requirements, and discipline in assembling the cost, schedule and relationships between the Work Packages. It is the discipline that requires the most focus for the planners and program controls staff. Without this discipline, the development of a credible baseline is simply not possible.
The concept of a Deliverables Based Plan (DBP) is at the core of the Performance Measurement Baseline (PMB).
Deliverables are the units of measure of progress to plan.
Deliverables are what the customer has paid money for.
Deliverables contain the business capabilities and the associated value that fulfill the requirements of the business plan.
Lewis & Fowler’s Deliverables Based Planning® method provides the tools, processes, and training needed to increase the probability of success of NIH projects.
This approach is unique in its integration of the critical success factors for projects, no matter the domain.
Our approach answers the following 5 immutable principles:
Where are we going?
Do we have a definitive description of the needed capabilities and the requirements needed to deliver those capabilities?
How do we get there?
What is the sequence of the work efforts to achieve the plan?
Do we have enough time, resources, and money to get there?
Are the resources properly allocated to the sequence of work activities?
What impediments will we encounter along the way?
Have we captured the risks and their handling plans for all the critical work activities?
How do we know we are making progress?
Can we measure progress to plan in units meaningful to the decision makers?
So let’s try out these ideas in a semi-real environment.
Here’s a picture of actual output from a capabilities development session for a major ERP system integration program.
Starting in the upper left, the needed capabilities are captured through a Product Development Kaizen with the stakeholders in an offsite session. These Kaizens are full contact meetings where the participants work “on the wall” to reveal the capabilities and the attributes of these capabilities.
These sticky notes are then captured in some organizing tool. My favorite for this level of detail is MindJet’s Mind Manager. It is a hierarchical organizing tool that can export its structure to MSFT Project.
No matter what tool you use, you’ve got to have some way to “discover” the business capabilities before moving to the next stage of the project.
Without a clear and concise description of the needed capabilities you’ll have a hard time recognizing “Done” when it arrives – if it ever arrives.
We’ve all heard of, or possibly used a system that met all the requirements but failed to provide the business value that was promised.
The development of the Capabilities – preferably in a Concept of Operations document is the foundation for the remaining efforts in increasing the probability of success for your project.
With the Capabilities in hand, the technical and operational requirements become clear – or at least clearer.
The elements of the IMP and IMS along with the cost and technical performance measures are driven by the following sources of data.
The Statement of Work is the starting point
The WBS/CWBS describes the product structure
Task in work packages show the work activities
The CDRLs guide the deliverables at least through PDR
Accomplishment Criteria are the exit criteria for the Work Packages
Significant Accomplishments collect the work products needed to move the program to the next level of maturity
The Program Events are the assessment of this maturity
When someone asks why we are doing this – the answer is here.
When we speak about increasing maturity, evolutionary development and measures of physical percent complete – one good way to visualize this is through a product maturity flow diagram.
This diagram is a “real” one from a claims processing ERP system rollout, complete with COTS (Commercial Off The Shelf) integration with legacy systems and custom software development. The worst of both worlds.
The critical success factor here is to define upfront what “done” looks like for all the incremental business value elements of the program.
Each of the circles is a point of maturity, where the delivered product or service can be put to use in some way.
As well, the project can be canceled at each of these points and the investment put to work. Many people would be unhappy, but the financial aspects of the program would remain intact.
This type of diagram also shows the dependencies between the downstream (right) and upstream (left) elements of the project – what has to come first before business value can be delivered.
In other domains (defense and government) these diagrams are built during the Product Development Kaizen process to show what must be done to get to “done.”
Now let’s put all this together.
Starting with the WBS, the terminal nodes are represented in project with Work Packages.
You cannot have a project without some form of a Work Breakdown Structure (WBS). Building a good WBS is a day long workshop all in itself.
So how many here have a Work Breakdown Structure for their project?
This may not be a term you’ve heard of before. A Work Package is a lump of work that produces a single outcome. A “package of work” that produces something.
For the schedule putting the Work Packages in the proper order is the starting point for a credible schedule. If you can get the Work Packages in the right order, with their durations defined, then you have the start of the credible schedule.
The next step is to not do any more scheduling in MSFT Project. Instead, let the Work Package manager manage the activities inside the Work Package and be done with it.
This is how large Defense and Space programs schedule. There are three levels of schedules mandated by a government “rule,” DID 81650.
For the commercial side there is no mandated regulation. But 881A and the PMI Work Breakdown Structure Guide are the best places to start.
The Master Schedule – a top level “picture” of what is happening in what order.
The Intermediate Schedule – the sequence of Work Packages.
The Detailed Schedule – the day to day activities of the project.
For IT projects, the Intermediate schedule is a sweet spot. One that connects cost with work and deliverables. One that minimally imposes effort on the team and fits well with the agile world of Corporate IT software development.
Since agile is focused on defining deliverables and measuring progress through working software, it is a natural fit for a WBS.
The collection of work packages, their relationship to the “exit criteria,” and then to the Significant Accomplishments landing on the Program Event is the topology of the Performance Measurement baseline – once the basis of estimate and the Technical Performance Measures are added.
A critical success factor for any program is to make visible the flow of “increasing maturity” for each deliverable. It is necessary to show how the work efforts are sequenced to produce the deliverables. In the absence of the measure of “increasing maturity,” these work efforts have no units of measure.
The result is that progress is measured as the passage of time and the consumption of resources.
What is needed is to measure progress in units of planned compliance with the Technical Performance Measures. Not a few TPMs for the program as a whole. But a planned measure of compliance for each Work Package outcome.
This approach assures that a description of what “done” looks like is available for each work effort.
Here’s a sample of a Microsoft Project file that models the structure shown a few slides back.
One of the new ideas about building the PMB is to only connect the Work Packages together – instead of the Tasks. Tasks within the Work Package can be interconnected.
But between the Work Packages, if there are interconnections, the work on the right will be using partially completed work from the left.
If this is done, the work package on the right will absorb this partially completed work, and when the work on the left is completed, rework may be needed for anything that changed.
Treat the work packages as complete “lumps of work,” with 100% fidelity to the planned deliverable.
This way the work package on the right can start with fully formed raw materials.
The same approach should be taken for the Significant Accomplishments and the Program Events.
Partially completed work should not cross a Program Event boundary unless it is long lead.
Otherwise you are moving rework from the past into the future.
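This Work-Package-only linking rule is easy to check mechanically. A minimal sketch of such a structural check – the task and link data shapes are illustrative, not an MSFT Project API:

    # Map each task to its owning Work Package, then flag any task-to-task
    # link whose endpoints live in different Work Packages.
    task_wp = {"T3": "WP1", "T4": "WP1", "T9": "WP2"}
    task_links = [("T3", "T4"), ("T4", "T9")]  # (predecessor, successor)

    for pred, succ in task_links:
        if task_wp[pred] != task_wp[succ]:
            print(f"Flag: {pred} -> {succ} crosses {task_wp[pred]} into "
                  f"{task_wp[succ]}; link the Work Packages instead")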
Here’s an example of how this looks in a real project, using the PERT Chart Expert tool from www.criticaltools.com.
The management of risk is how Adults manage projects – Tim Lister
The Integrated Master Schedule must show how risk is being “retired” if that schedule is to be considered credible.
Without this description, when the risk turns into an issue, there will be little time to make the correction, and most likely the reserve assigned to the risk will have been consumed.
Risk Retirement is better than Risk Mitigation.
When we say “credible” what do we really mean?
What are the units of measure of credible?
The existence of the Performance Measurement Baseline (PMB) is necessary but not sufficient for program success. In the EVMIG there is mention of the budget plan for accomplishing the work.
The first question is - “what work?”
If we start with a Systems Engineering paradigm, the core measurements are the Measures of Effectiveness and Measures of Performance:
MOE - Operational measures of success that are closely related to the achievement of the mission or operational objective being evaluated, in the intended operational environment under a specified set of conditions: (1) stated from the customer point of view; (2) focused on the most critical mission performance needs; (3) independent of any particular solution; (4) actual measures at the end of development.
MOP - A criterion used to assess friendly actions that is tied to measuring task accomplishment. Measures that characterize physical or functional attributes relating to the system operation: (1) supplier’s point of view; (2) measured under specified testing or operational conditions; (3) assesses delivered solution performance against critical system-level specified requirements; (4) risk indicators that are monitored progressively.
So now we’re back to the problem of measuring credibility in units meaningful to the customer.
Here’s the beginning of this measurement.
Do we know the underlying statistical behavior of the cost, schedule, and technical performance measures?
Do we have a credible architecture for producing and assessing the increasing maturity of the products or services? Not just assessing but also producing.
Can we define what “done” looks like in measures of effectiveness and measures of performance?